
    Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction

    Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts, and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics, such as the Dice score, on the estimation of clinically relevant parameters in 2D-3D bone shape reconstruction is not well understood. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and implementations of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of the 8 2D-3D models on an equal footing using the 6 public datasets, which comprise images of four different anatomies. Our results show that attention-based methods that capture global spatial relationships tend to perform better across all anatomies and datasets; that performance on clinically relevant subgroups may be overestimated without disaggregated reporting; that ribs are substantially more difficult to reconstruct than the femur, hip, and spine; and that improvements in Dice score do not always bring corresponding improvements in the automatic estimation of clinically relevant parameters.
    Comment: accepted to NeurIPS 202
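    As a point of reference, the Dice score discussed above is a plain overlap ratio; a minimal NumPy sketch (our illustration, not the benchmark's reference implementation) makes clear why it can saturate while shape-derived clinical parameters still disagree:

        import numpy as np

        def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
            """Dice overlap between two binary segmentation volumes."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            intersection = np.logical_and(pred, gt).sum()
            denom = pred.sum() + gt.sum()
            return 2.0 * intersection / denom if denom > 0 else 1.0

        # Two reconstructions with nearly identical Dice can still differ in a
        # clinical parameter derived from the shape (e.g., an angle between
        # automatically extracted landmarks), which is why the benchmark reports
        # segmentation overlap and clinical parameter error separately.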

    Simulating Longitudinal Brain MRIs with known Volume Changes and Realistic Variations in Image Intensity

    This paper presents a simulator tool that can generate large databases of visually realistic longitudinal MRIs with known volume changes. The simulator is based on a previously proposed biophysical model of brain deformation due to atrophy in AD. In this work, we propose a novel way of reproducing realistic intensity variation in longitudinal brain MRIs, inspired by an approach used for the generation of synthetic cardiac image sequences. This approach combines a deformation field obtained from the biophysical model with a deformation field obtained by a non-rigid registration of two images. The combined deformation field is then used to simulate a new image with the specified atrophy from the first image, but with the intensity characteristics of the second image. This makes it possible to generate the realistic variations present in real longitudinal time-series of images, such as the independence of noise between two acquisitions and the potential presence of variable acquisition artifacts. The various options available in the simulator software are briefly explained in this paper, and the software is released as an open-source repository. Its availability allows researchers to produce tailored databases of images with ground-truth volume changes, which we believe will help in developing more robust brain morphometry tools. We also believe that the scientific community can use the software to further experiment with the proposed model and to add more complex models of brain deformation and atrophy generation.
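    The core operation described above is the composition of two displacement fields, followed by a resampling step. A minimal sketch, assuming dense voxel-space displacement fields and trilinear interpolation (an illustration of the idea, not the simulator's actual code):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def compose_displacements(u_a, u_b):
            """Displacement of T_a o T_b, where T(x) = x + u(x).
            u_a, u_b: arrays of shape (3, X, Y, Z) in voxel units."""
            grid = np.indices(u_b.shape[1:]).astype(float)
            warped = grid + u_b                        # T_b(x)
            u_a_at_tb = np.stack([
                map_coordinates(u_a[i], warped, order=1, mode="nearest")
                for i in range(3)
            ])                                         # u_a evaluated at T_b(x)
            return u_b + u_a_at_tb

        def warp_image(image, u):
            """Resample an image with a (backward) displacement field."""
            grid = np.indices(image.shape).astype(float)
            return map_coordinates(image, grid + u, order=1, mode="nearest")

        # Composing the model-based field (known atrophy) with the field from a
        # non-rigid registration of two real scans gives a simulated image that
        # carries the prescribed atrophy together with the noise and artefact
        # characteristics of an independent acquisition.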

    Exploring Transfer Learning in Medical Image Segmentation using Vision-Language Models

    Medical image segmentation is crucial in various clinical applications within the medical domain. While state-of-the-art segmentation models have proven effective, integrating textual guidance to enhance visual features for this task remains an area with limited progress. Existing segmentation models that utilize textual guidance are primarily trained on open-domain images, raising concerns about their direct applicability to the medical domain without manual intervention or fine-tuning. To address these challenges, we propose using multimodal vision-language models to capture semantic information from image descriptions and images, enabling the segmentation of diverse medical images. This study comprehensively evaluates existing vision-language models across multiple datasets to assess their transferability from the open domain to the medical field. Furthermore, we introduce variations of the image descriptions for previously unseen images in the dataset, revealing notable variations in model performance depending on the generated prompts. Our findings highlight the distribution shift between open-domain and medical images and show that segmentation models trained on open-domain images are not directly transferable to the medical field; however, their performance can be improved by fine-tuning on medical datasets. We report the zero-shot and fine-tuned segmentation performance of 4 vision-language models (VLMs) on 11 medical datasets using 9 types of prompts derived from 14 attributes.
    Comment: 25 pages, 9 figures
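    The prompt-sensitivity experiment described above amounts to scoring the same image under different text prompts. A minimal sketch, where `model.segment` is a hypothetical stand-in for whichever promptable VLM is under test and the example prompts are illustrative rather than the paper's actual attribute templates:

        import numpy as np

        def evaluate_prompts(model, image, gt_mask, prompts):
            """Dice of a text-promptable segmenter under different prompts."""
            scores = {}
            for prompt in prompts:
                pred = model.segment(image, prompt)  # hypothetical API: binary mask
                inter = np.logical_and(pred, gt_mask).sum()
                denom = pred.sum() + gt_mask.sum()
                scores[prompt] = 2.0 * inter / denom if denom > 0 else 1.0
            return scores

        # Illustrative prompt variants built from image attributes:
        prompts = [
            "a medical image of a lesion",
            "an irregular, darkly pigmented skin lesion",
        ]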

    Simulating Patient Specific Multiple Time-point MRIs From a Biophysical Model of Brain Deformation in Alzheimer's Disease

    This paper proposes a framework to simulate patient-specific structural Magnetic Resonance Images (MRIs) from the available MRI scans of Alzheimer's Disease (AD) subjects. We use a biophysical model of brain deformation due to atrophy that can generate biologically plausible deformations for any desired volume changes prescribed at the voxel level of the brain MRI. A large number of brain regions are segmented in 45 AD patients, and yearly atrophy rates in these regions are estimated from the two available extremal time-point scans. Assuming a linear progression of atrophy, the volume changes at the scan closest to the halfway time point are computed. These atrophy maps are prescribed to the baseline images to simulate the middle time-point images using the biophysical model of brain deformation. The volume changes from baseline in the real middle time-point scans are then compared to those in the simulated middle time-point images. The framework also allows prescribing desired atrophy patterns at different time points to simulate a non-linear progression of atrophy. This opens a way to use a biophysical model of brain deformation to evaluate methods that study the temporal progression and spatial relationships of atrophy across different brain regions in AD.
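    Under the linearity assumption above, the atrophy prescribed at an intermediate time point is simply a time-fraction of the total measured change; a one-function sketch (illustrative, not the authors' code):

        import numpy as np

        def interpolated_atrophy(total_atrophy: np.ndarray,
                                 t0: float, t1: float, t_mid: float) -> np.ndarray:
            """Voxel-wise volume change to prescribe at t_mid, assuming the
            change measured between t0 and t1 accrues linearly in time."""
            frac = (t_mid - t0) / (t1 - t0)
            return frac * total_atrophy

        # A scan halfway between the extremal time points gets half of the total
        # measured volume change; replacing `frac` with any monotone function of
        # time yields the non-linear progression scenarios mentioned above.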

    A Biophysical Model of Shape Changes due to Atrophy in the Brain with Alzheimer's Disease

    This paper proposes a model of brain deformation triggered by atrophy in Alzheimer's Disease (AD). We introduce a macroscopic biophysical model assuming that the density of the brain remains constant, hence its volume shrinks when neurons die in AD. The deformation in the brain parenchyma minimizes the elastic strain energy under the prescribed local volume loss. The cerebrospinal fluid (CSF) is modelled differently, to allow for fluid readjustments occurring at a much faster time-scale. The PDEs describing the model are discretized on a staggered grid and solved using the finite difference method. We illustrate the power of the model by showing the different deformation patterns obtained for the same global atrophy when it is prescribed in gray matter (GM) versus white matter (WM) on a generic atlas MRI, and with a realistic AD simulation on a subject MRI. This well-grounded forward model opens a way to study different hypotheses about the distribution of brain atrophy and its impact on the observed changes in MR images.
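    In symbols, the parenchyma part of the model can be read as a constrained minimization. The sketch below is our paraphrase, with boundary conditions and the CSF treatment omitted; u is the displacement field, epsilon(u) the strain tensor, mu the shear modulus, and a the prescribed local volume loss:

        \min_{\mathbf{u}} \int_{\Omega} \mu \,\|\boldsymbol{\varepsilon}(\mathbf{u})\|_F^2 \, d\Omega
        \quad \text{subject to} \quad \nabla \cdot \mathbf{u} = -a \ \text{in the parenchyma},
        \qquad \boldsymbol{\varepsilon}(\mathbf{u}) = \tfrac{1}{2}\left(\nabla \mathbf{u} + \nabla \mathbf{u}^{\top}\right).

    Enforcing the divergence constraint with a pressure-like Lagrange multiplier yields Stokes-type equations, for which a staggered-grid finite-difference discretization is a natural choice.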

    Deep-learning assisted detection and quantification of (oo)cysts of Giardia and Cryptosporidium on smartphone microscopy images

    The consumption of microbial-contaminated food and water is responsible for the deaths of millions of people annually. Smartphone-based microscopy systems are portable, low-cost, and more accessible alternatives to traditional brightfield microscopes for the detection of Giardia and Cryptosporidium. However, the images from smartphone microscopes are noisier and require manual cyst identification by trained technicians, who are usually unavailable in resource-limited settings. Automatic detection of (oo)cysts using deep-learning-based object detection could offer a solution to this limitation. We evaluate the performance of three state-of-the-art object detectors in detecting (oo)cysts of Giardia and Cryptosporidium on a custom dataset that includes both smartphone and brightfield microscopy images of vegetable samples. Faster R-CNN, RetinaNet, and You Only Look Once (YOLOv8s) deep-learning models were employed to explore their efficacy and limitations. Our results show that while the deep-learning models perform better on the brightfield microscopy image dataset than on the smartphone microscopy image dataset, the smartphone microscopy predictions are still comparable to the performance of non-experts.
    Comment: 18 pages (including supplementary information), 4 figures, 7 tables; submitted to the Journal of Machine Learning for Biomedical Imaging
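    Of the three detectors, Faster R-CNN has a standard off-the-shelf fine-tuning recipe in torchvision; a minimal sketch of adapting it to the two (oo)cyst classes (our illustration of that common pattern, not the paper's training code):

        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

        # Load a COCO-pretrained detector and swap its box-classification head
        # for one with our classes: background, Giardia, Cryptosporidium.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

        # At inference the model returns per-image dicts of boxes, labels, and
        # confidence scores:
        #   model.eval(); preds = model([image_tensor])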

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstruction requires information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently from the mother, so external trackers such as electromagnetic or optical systems cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction that combines recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through a Residual 3D U-Net, and its output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground-truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.
    Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI), 201
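    The segment-then-track pipeline described above can be summarized in a few lines; `unet`, `estimate_pose`, and `compound` below are hypothetical placeholders, not the authors' API:

        import numpy as np

        def reconstruct(us_volumes, unet, estimate_pose, compound):
            """Segment-then-track loop: SLAM-style tracking on fetal anatomy only."""
            masked = [vol * unet(vol) for vol in us_volumes]  # suppress non-fetal tissue
            poses = [np.eye(4)]                               # first frame is the reference
            for prev, curr in zip(masked, masked[1:]):
                # Frame-to-frame registration: relative rigid 4x4 transform
                # between consecutive probe positions
                poses.append(poses[-1] @ estimate_pose(prev, curr))
            return compound(masked, poses)  # fuse into one extended-FOV volume

        # Because tracking runs on the segmented fetal anatomy rather than the
        # whole image, motion of the fetus relative to the mother does not
        # corrupt the estimated trajectory, unlike with external trackers.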